
    Do Gender Differences in Perceived Prototypical Computer Scientists and Engineers Contribute to Gender Gaps in Computer Science and Engineering?

    Women are vastly underrepresented in the fields of computer science and engineering (CS&E). We examined whether women view the intellectual characteristics of prototypical individuals in CS&E in more stereotype-consistent ways than men do and, consequently, show less interest in CS&E. We asked 269 U.S. college students (187 women; 69.5%) to describe the prototypical computer scientist (Study 1) or engineer (Study 2) through open-ended descriptions as well as through a set of trait ratings. Participants also rated themselves on the same set of traits and rated their similarity to the prototype. Finally, participants in both studies were asked to describe their likelihood of pursuing future college courses and careers in computer science (Study 1) or engineering (Study 2). Across both studies, we found that women offered more stereotype-consistent ratings than did men of the intellectual characteristics of prototypes in CS (Study 1) and engineering (Study 2). Women also perceived themselves as less similar to the prototype than men did. Further, the observed gender differences in prototype perceptions mediated the tendency for women to report lower interest in CS&E fields relative to men. Our work highlights the importance of prototype perceptions for understanding the gender gap in CS&E and suggests avenues for interventions that may increase women's representation in these vital fields.

    Do test items that induce overconfidence make unskilled performers unaware?

    When a person estimates their global (overall) performance on a test they just completed, low performers often overestimate their performance, whereas high performers estimate more accurately or slightly underestimate. Thus, low performers have been described as "unskilled and unaware" (Kruger & Dunning, 1999). However, recent evidence (Hartwig & Dunlosky, in press) demonstrates that low performers sometimes estimate accurately. What determines whether a participant estimates accurately vs. inaccurately remains unclear. Thus, the present research asks: What might participants use as the basis for their global estimates, and can it explain the accuracy of those estimates? One intuitive possibility is that participants use their response confidence in test items as the basis of their global estimates. A simple instantiation of this idea is described by the item-frequency hypothesis, which posits that participants compute the frequency of their high-confidence responses, and this frequency serves as an estimate of their global performance. A corollary of this hypothesis is that items that produce high confidence in wrong answers (i.e., false alarms, or FAs) will contribute to global overestimates, whereas items that produce low confidence in correct answers (i.e., misses) will contribute to global underestimates. Study 1 found preliminary support for the hypothesis, because the frequency of high-confidence responses on a typical trivia test was correlated with participants' global estimates, and the imbalance of FAs vs. misses predicted the accuracy of those estimates. To evaluate the hypothesis experimentally, Studies 2 and 3 manipulated the frequencies of FAs and misses that a trivia test was expected to yield, and participants were randomly assigned to receive one of the tests. Tests designed to yield many FAs (relative to misses) produced global overestimation, tests designed to yield more misses (relative to FAs) produced underestimation, and tests designed to yield a balance of FAs and misses produced accurate estimation. Thus, the selection of test items affects global estimates and their accuracy. The imbalance of FAs and misses could not explain all individual differences in estimation accuracy, but it nonetheless was a moderate predictor of global estimation accuracy.
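    The item-frequency hypothesis lends itself to a simple simulation. The sketch below uses invented response data (not the authors' materials) to show how the balance of FAs and misses drives the predicted estimation error:

```python
# Sketch of the item-frequency hypothesis: a test-taker's global estimate
# equals the count of their high-confidence responses, so high-confidence
# errors (false alarms, FAs) inflate the estimate and low-confidence correct
# answers (misses) deflate it. All response data here are hypothetical.

def global_estimate(responses):
    """Predicted global estimate: number of high-confidence responses."""
    return sum(1 for correct, high_conf in responses if high_conf)

def actual_score(responses):
    """Actual performance: number of correct responses."""
    return sum(1 for correct, high_conf in responses if correct)

# Each response is a (correct, high_confidence) pair.
fa = (False, True)    # false alarm: confident but wrong
miss = (True, False)  # miss: correct but unconfident
hit = (True, True)    # confident and correct
cr = (False, False)   # unconfident and wrong

fa_heavy = [hit] * 5 + [fa] * 4 + [cr]               # many FAs
miss_heavy = [hit] * 5 + [miss] * 4 + [cr]           # many misses
balanced = [hit] * 5 + [fa] * 2 + [miss] * 2 + [cr]  # FAs and misses balanced

for name, test in [("FA-heavy", fa_heavy),
                   ("miss-heavy", miss_heavy),
                   ("balanced", balanced)]:
    est, score = global_estimate(test), actual_score(test)
    print(f"{name}: estimate={est}, actual={score}, error={est - score:+d}")
```

Under this toy model, the FA-heavy test yields overestimation (+4), the miss-heavy test underestimation (-4), and the balanced test an accurate estimate (error 0), mirroring the pattern Studies 2 and 3 report.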